adversarial event


Developing a Taxonomy of Elements Adversarial to Autonomous Vehicles

Saffary, Mohammadali, Inampudi, Nishan, Siegel, Joshua E.

arXiv.org Artificial Intelligence

As highly automated vehicles reach higher deployment rates, they find themselves in increasingly dangerous situations. Because a crash carries significant consequences for the health of occupants and bystanders, for property, and for the viability of autonomy and adjacent businesses, we must search for more efficacious ways to comprehensively and reliably train autonomous vehicles to better navigate the complex scenarios with which they struggle. We therefore introduce a taxonomy of potentially adversarial elements that may contribute to poor performance or system failures, as a means of identifying and elucidating lesser-seen risks. This taxonomy may be used to characterize failures of automation, as well as to support simulation and real-world training efforts by providing a more comprehensive classification system for events resulting in disengagement, collision, or other negative consequences. The taxonomy is built from and tested against real collision events to ensure comprehensive coverage with minimal class overlap and few omissions. It is intended both for identifying harm-contributing adversarial events and for generating them (to create extreme edge- and corner-case scenarios) in training procedures.
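The abstract does not enumerate the taxonomy's classes, but a classification scheme of this kind maps naturally onto a small data model. The sketch below is a minimal illustration in Python; the category names, the CollisionEvent record, and the coverage_report helper are all assumptions invented for illustration, not the paper's actual taxonomy:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class AdversarialElement(Enum):
    """Hypothetical top-level classes; the paper's real taxonomy may differ."""
    ENVIRONMENTAL = auto()    # e.g., glare, fog, road debris
    BEHAVIORAL = auto()       # e.g., erratic drivers, jaywalking pedestrians
    INFRASTRUCTURAL = auto()  # e.g., occluded signage, faded lane markings
    SYSTEMIC = auto()         # e.g., sensor degradation, software faults


@dataclass
class CollisionEvent:
    """A disengagement/collision record tagged with contributing elements."""
    description: str
    elements: set[AdversarialElement] = field(default_factory=set)


def coverage_report(events: list[CollisionEvent]) -> dict[AdversarialElement, int]:
    """Count how often each class is assigned across labeled events."""
    counts = {cls: 0 for cls in AdversarialElement}
    for event in events:
        for element in event.elements:
            counts[element] += 1
    return counts


# Example: tagging one event with two contributing classes.
event = CollisionEvent(
    "Vehicle failed to yield at an intersection with a sun-occluded signal",
    {AdversarialElement.ENVIRONMENTAL, AdversarialElement.INFRASTRUCTURAL},
)
print(coverage_report([event]))
```

Counting how often each class appears across labeled events is one simple way to probe the coverage and class-overlap properties the abstract emphasizes.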


Adversarial Attack for Asynchronous Event-based Data

Lee, Wooju, Myung, Hyun

arXiv.org Artificial Intelligence

Deep neural networks (DNNs) are vulnerable to adversarial examples that are carefully designed to cause the deep learning model to make mistakes. Adversarial examples for 2D images and 3D point clouds have been extensively studied, but studies on event-based data are limited. Event-based data can be an alternative to 2D images under high-speed movement, such as in autonomous driving; however, adversarial events make current deep learning models vulnerable to safety issues. In this work, we are the first to generate adversarial examples for event-based data and then train robust models on them. Our algorithm shifts the times of the original events and generates additional adversarial events in two stages. First, null events are added to the event-based data; the perturbation size can be controlled with the number of null events. Second, the locations and times of the additional adversarial events are set to mislead DNNs in a gradient-based attack. Our algorithm achieves an attack success rate of 97.95% on the N-Caltech101 dataset. Furthermore, the adversarially trained model is more robust on adversarial event data than the original model.
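The abstract describes the attack only at a high level. The sketch below shows one way the two stages could look in PyTorch, assuming events are stored as a float tensor of (x, y, t, polarity) rows and that model maps such a tensor to class logits; the tensor layout, the hyperparameters, and the generate_adversarial_events name are all assumptions for illustration, not the authors' implementation:

```python
import torch


def generate_adversarial_events(events, model, label, num_null=100,
                                steps=10, lr=0.01):
    """Two-stage adversarial-event sketch.

    events: float tensor of shape (N, 4) with (x, y, t, polarity) rows.
    model:  callable mapping an event tensor to class logits (1, C).
    label:  LongTensor of shape (1,) holding the true class index.
    """
    # Stage 1: append null events copied from random originals; the
    # number of null events (num_null) bounds the perturbation size.
    idx = torch.randint(len(events), (num_null,))
    adv = torch.cat([events, events[idx].clone()], dim=0)

    # Learnable perturbation over the combined event set.
    delta = torch.zeros_like(adv, requires_grad=True)

    for _ in range(steps):
        # Stage 2: gradient-based attack -- ascend the classification
        # loss so the model is misled on the perturbed events.
        loss = torch.nn.functional.cross_entropy(model(adv + delta), label)
        loss.backward()
        with torch.no_grad():
            step = lr * delta.grad.sign()
            delta[:, 2] += step[:, 2]  # shift all event times
            delta[len(events):, :2] += step[len(events):, :2]  # move null (x, y)
        delta.grad.zero_()

    return (adv + delta).detach()
```

Capping num_null is what bounds the perturbation size, mirroring the abstract's first stage; the gradient loop is a standard PGD-style attack standing in for the paper's gradient-based second stage.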


Common Assumptions on Machine Learning Malfunctions Could be Wrong

#artificialintelligence

Deep neural networks are one of the most fundamental tools in artificial intelligence (AI), used to process images and other data through mathematical modeling. They are responsible for some of the field's greatest advances, but they also malfunction in various ways. These malfunctions range from small or even negligible in impact, such as a simple misidentification, to dramatic and deadly, such as a failure in a self-driving car. New research from the University of Houston suggests that our common assumptions about these malfunctions may be wrong, which could help in evaluating the reliability of the networks in the future. The paper was published in Nature Machine Intelligence in November.


Misinformation or artifact: A new way to think about machine learning: A researcher considers when - and if - we should consider artificial intelligence a failure

#artificialintelligence

Deep neural networks are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless -- misidentifying one animal as another -- to potentially deadly, as when the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed. A philosopher at the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause of these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.

As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call "adversarial examples": cases in which a deep neural network misjudges images or other data when confronted with information outside the training inputs used to build the network. Such examples are rare and are called "adversarial" because they are often created or discovered by another machine learning network -- a sort of brinksmanship in the machine learning world between more sophisticated methods of creating adversarial examples and more sophisticated methods of detecting and avoiding them. "Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are," Buckner said.
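For readers unfamiliar with how adversarial examples arise in practice, the sketch below shows the fast gradient sign method (FGSM), one standard and widely known construction. It is a generic illustration, not the specific method discussed in Buckner's paper, and model, image, label, and epsilon are placeholder names:

```python
import torch


def fgsm_example(model, image, label, epsilon=0.03):
    """Fast gradient sign method: nudge every input value one small
    step in the direction that increases the classifier's loss. The
    perturbation is tiny to a human eye yet can flip the prediction."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # one signed step
    return adversarial.clamp(0.0, 1.0).detach()        # keep valid pixel range
```

The "brinksmanship" the article describes plays out between such attack constructions and defenses like adversarial training, which folds generated examples back into the training set.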